282 research outputs found

    Truncated Schwinger-Dyson Equations and Gauge Covariance in QED3

    We study the Landau-Khalatnikov-Fradkin transformations (LKFT) in momentum space for the dynamically generated mass function in QED3. Starting from the Landau gauge results in the rainbow approximation, we construct solutions in other covariant gauges. We confirm that the chiral condensate is gauge invariant, as the structure of the LKFT predicts. We also check that the gauge dependence of the constituent fermion mass is considerably reduced compared to the one obtained directly by solving the SDE. Comment: 17 pages, 11 figures. v3. Improved and expanded. To appear in Few-Body Systems

    Washing scaling of GeneChip microarray expression

    BACKGROUND: Post-hybridization washing is an essential part of microarray experiments. Both the quality of the experimental washing protocol and adequate consideration of washing in intensity calibration ultimately affect the quality of the expression estimates extracted from the microarray intensities.
    RESULTS: We conducted experiments on GeneChip microarrays with altered protocols for washing, scanning and staining to study the probe-level intensity changes as a function of the number of washing cycles. For calibration and analysis of the intensity data we make use of the 'hook' method, which allows intensity contributions due to non-specific and specific hybridization of perfect match (PM) and mismatch (MM) probes to be disentangled in a sequence-specific manner. On average, washing according to the standard protocol removes about 90% of the non-specific background, and about 30-50% and less than 10% of the specific targets from the MM and PM probes, respectively. Analysis of the washing kinetics shows that the signal-to-noise ratio doubles roughly every ten stringent washing cycles. Washing can be characterized by time-dependent rate constants which reflect the heterogeneous character of target binding to microarray probes. We propose an empirical washing function which estimates the survival of probe-bound targets. It depends on the intensity contributions due to specific and non-specific hybridization per probe, which can be estimated for each probe using existing methods. The washing function allows probe intensities to be calibrated for the effect of washing. On a relative scale, proper calibration for washing markedly increases expression measures, especially in the limit of small and large values.
    CONCLUSIONS: Washing is among the factors which potentially distort expression measures. The proposed first-order correction method allows direct implementation in existing calibration algorithms for microarray data. We provide an experimental 'washing data set' which might be used by the community for developing amendments of the washing correction.
    This publication is supported by the Leipzig Interdisciplinary Research Cluster of Genetic Factors, Clinical Phenotypes and Environment (LIFE Center, Universität Leipzig) and an Australian Academy of Science Visits to Europe grant. LIFE is funded by means of the European Union, by the European Regional Development Fund (ERDF) and by means of the Free State of Saxony within the framework of the excellence initiative.
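    The first-order washing correction can be illustrated with a short sketch. The exponential survival terms, the rate constants and the function names below are assumptions for illustration only; the paper derives an empirical washing function with time-dependent rates rather than the simple form used here.

```python
# Illustrative sketch only: model a probe intensity after n stringent washing
# cycles as the sum of a non-specific and a specific contribution, each
# decaying with its own (hypothetical) rate constant, and invert that model
# to calibrate a measured intensity for the effect of washing.
import numpy as np

def washed_intensity(nonspecific, specific, cycles, k_ns=0.25, k_s=0.01):
    """Intensity remaining after a given number of washing cycles."""
    return nonspecific * np.exp(-k_ns * cycles) + specific * np.exp(-k_s * cycles)

def washing_correction(measured, nonspecific_frac, cycles, k_ns=0.25, k_s=0.01):
    """Rescale a measured intensity to its pre-wash value, given the
    estimated pre-wash fraction of non-specific signal for this probe."""
    survival = (nonspecific_frac * np.exp(-k_ns * cycles)
                + (1.0 - nonspecific_frac) * np.exp(-k_s * cycles))
    return measured / survival
```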

    A Revised Design for Microarray Experiments to Account for Experimental Noise and Uncertainty of Probe Response

    Background: Although microarrays are widely used analysis tools in biomedical research, they are known to yield noisy output that usually requires experimental confirmation. To tackle this problem, many studies have developed rules for optimizing probe design and devised complex statistical tools to analyze the output. However, less emphasis has been placed on systematically identifying the noise component as part of the experimental procedure. One source of noise is the variance in probe binding, which can be assessed by replicating array probes. The second source is poor probe performance, which can be assessed by calibrating the array based on a dilution series of target molecules. Using model experiments for copy number variation and gene expression measurements, we investigate here a revised design for microarray experiments that addresses both of these sources of variance.
    Results: Two custom arrays were used to evaluate the revised design: one based on 25-mer probes from an Affymetrix design and the other based on 60-mer probes from an Agilent design. To assess experimental variance in probe binding, all probes were replicated ten times. To assess probe performance, the probes were calibrated using a dilution series of target molecules and the signal response was fitted to an adsorption model. We found that significant variance of the signal could be controlled by averaging across probes and removing probes that are nonresponsive or poorly responsive in the calibration experiment. Taking this into account, one can obtain a more reliable signal with the added option of obtaining absolute rather than relative measurements.
    Conclusion: The assessment of technical variance within the experiments, combined with the calibration of probes, allows poorly responding probes to be removed and yields more reliable signals for the remaining ones. Once an array is properly calibrated, absolute quantification of signals becomes straightforward, alleviating the need for normalization and reference hybridizations.
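    A minimal sketch of the probe-filtering step described above is given below. The array shape, the fold-response criterion and its threshold are illustrative assumptions, not the paper's actual calibration procedure, which fits an adsorption model to the dilution series.

```python
# Hedged sketch: average replicate probes and drop probes whose response
# across the dilution series is too weak to be informative. Threshold and
# names are hypothetical.
import numpy as np

def filter_probes(signal, min_fold_response=2.0):
    """signal: array of shape (n_probes, n_replicates, n_dilutions),
    ordered from the most dilute to the most concentrated target."""
    mean_per_dilution = signal.mean(axis=1)               # average the replicates
    fold_response = mean_per_dilution[:, -1] / mean_per_dilution[:, 0]
    responsive = fold_response >= min_fold_response       # keep responsive probes
    return mean_per_dilution[responsive], responsive
```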

    Asymptotic behaviour and optimal word size for exact and approximate word matches between random sequences

    BACKGROUND: The number of k-words shared between two sequences is a simple and efficient alignment-free sequence comparison method. This statistic, D2, has been used for the clustering of EST sequences. Sequence comparison based on D2 is extremely fast; its runtime is proportional to the size of the sequences under scrutiny, whereas alignment-based comparisons have a worst-case run time proportional to the square of the size. Recent studies have tackled the rigorous study of the statistical distribution of D2, and asymptotic regimes have been derived. The distribution of approximate k-word matches has also been studied.
    RESULTS: We have computed the D2 optimal word size for various sequence lengths, and for both perfect and approximate word matches. Kolmogorov-Smirnov tests show D2 to have a compound Poisson distribution at the optimal word size for small sequence lengths (below 400 letters) and a normal distribution at the optimal word size for large sequence lengths (above 1600 letters). We find that the D2 statistic outperforms BLAST in the comparison of artificially evolved sequences, and performs similarly to other methods based on exact word matches. These results obtained with randomly generated sequences are also valid for sequences derived from human genomic DNA.
    CONCLUSION: We have characterized the distribution of the D2 statistic at optimal word sizes. We find that the best trade-off between computational efficiency and accuracy is obtained with exact word matches. Given that our numerical tests have not included sequence shuffling, transposition or splicing, the improvements over existing methods reported here underestimate those expected in real sequences. Because of the linear run time and of the known normal asymptotic behavior, D2-based methods are most appropriate for large genomic sequences.
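    The exact-match D2 statistic itself is simple to compute: it is the sum, over all k-words, of the product of the word's counts in the two sequences. A minimal sketch follows; the function name and the toy sequences are illustrative only.

```python
# Exact-match D2: count shared k-word occurrences between two sequences.
from collections import Counter

def d2_statistic(seq1: str, seq2: str, k: int) -> int:
    """Sum over all k-words w of count(w, seq1) * count(w, seq2)."""
    counts1 = Counter(seq1[i:i + k] for i in range(len(seq1) - k + 1))
    counts2 = Counter(seq2[i:i + k] for i in range(len(seq2) - k + 1))
    # Only words occurring in both sequences contribute to the sum.
    return sum(c * counts2[w] for w, c in counts1.items() if w in counts2)

print(d2_statistic("ACGTACGT", "TACGTTAC", k=4))  # 3 shared 4-word matches
```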

    Experimental Comparison and Evaluation of the Affymetrix Exon and U133Plus2 GeneChip Arrays

    Affymetrix exon arrays offer scientists the only solution for exon-level expression profiling at the whole-genome scale on a single array. These arrays feature a new chip design with no mismatch probes and a radically new random primed protocol to generate sense DNA targets along the entire length of the transcript. In addition to these changes, a limited number of validating experiments and virtually no experimental data to rigorously address the comparability of all-exon arrays with conventional 3'-arrays result in a natural reluctance to replace conventional expression arrays with the new all-exon platform.
    Using commercially available Affymetrix arrays, we assess the performance of the Human Exon 1.0 ST (HuEx) and U133 Plus 2.0 (U133Plus2) platforms directly through a series of 'spike-in' hybridizations containing 25 transcripts in the presence of a fixed eukaryotic background. Specifically, we compare the measures of expression for HuEx and U133Plus2 arrays to evaluate the precision of these measures as well as the specificity and sensitivity of the measures' ability to detect differential expression.
    This study presents an experimental comparison and systematic cross-validation of Affymetrix exon arrays and establishes high comparability of expression changes and probe performance characteristics between Affymetrix conventional and exon arrays. In addition, this study offers a reliable benchmark data set for the comparison of competing exon expression measures, the selection of methods suitable for mapping exon array measures to the wealth of previously generated microarray data, as well as the development of more advanced methods for exon- and transcript-level expression summarization.
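    With a spike-in design, the sensitivity and specificity of a differential-expression call can be scored directly against the known truth. The sketch below illustrates this kind of benchmark; the function and variable names are assumptions, not the study's actual evaluation code.

```python
# Hedged sketch: score differential-expression calls against spike-in truth.
def sensitivity_specificity(all_probesets, called_de, spiked_in):
    """all_probesets, called_de, spiked_in: sets of probeset identifiers."""
    negatives = all_probesets - spiked_in
    true_pos = len(called_de & spiked_in)      # spiked transcripts detected
    true_neg = len(negatives - called_de)      # background correctly left uncalled
    sensitivity = true_pos / len(spiked_in)
    specificity = true_neg / len(negatives)
    return sensitivity, specificity
```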

    3D correlative light and electron microscopy of cultured cells using serial blockface scanning electron microscopy

    The processes of life take place in multiple dimensions, but imaging these processes in even three dimensions is challenging. Here, we describe a workflow for 3D correlative light and electron microscopy (CLEM) of cell monolayers using fluorescence microscopy to identify and follow biological events, combined with serial blockface scanning electron microscopy to analyse the underlying ultrastructure. The workflow encompasses all steps from cell culture to sample processing, imaging strategy, and 3D image processing and analysis. We demonstrate successful application of the workflow to three studies, each aiming to better understand complex and dynamic biological processes, including bacterial and viral infections of cultured cells and formation of entotic cell-in-cell structures commonly observed in tumours. Our workflow revealed new insight into the replicative niche of Mycobacterium tuberculosis in primary human lymphatic endothelial cells, HIV-1 in human monocyte-derived macrophages, and the composition of the entotic vacuole. The broad application of this 3D CLEM technique will make it a useful addition to the correlative imaging toolbox for biomedical research

    "Hook"-calibration of GeneChip-microarrays: Theory and algorithm

    Background: The improvement of microarray calibration methods is an essential prerequisite for quantitative expression analysis. This issue requires the formulation of an appropriate model describing the basic relationship between the probe intensity and the specific transcript concentration in a complex environment of competing interactions, the estimation of the magnitude of these effects and their correction using the intensity information of a given chip and, finally, the development of practicable algorithms which judge the quality of a particular hybridization and estimate the expression degree from the intensity values.
    Results: We present the so-called hook-calibration method, which co-processes the log-difference (delta) and log-sum (sigma) of the perfect match (PM) and mismatch (MM) probe intensities. The MM probes are utilized as an internal reference which is subjected to the same hybridization law as the PM, however with modified characteristics. After sequence-specific affinity correction the method fits the Langmuir adsorption model to the smoothed delta-versus-sigma plot. The geometrical dimensions of this so-called hook curve characterize the particular hybridization in terms of simple geometric parameters which provide information about the mean non-specific background intensity, the saturation value, the mean PM/MM-sensitivity gain and the fraction of absent probes. This graphical summary spans a metrics system for expression estimates in natural units such as the mean binding constants and the occupancy of the probe spots. The method is single-chip based, i.e. it separately uses the intensities for each selected chip.
    Conclusion: The hook method corrects the raw intensities for the non-specific background hybridization in a sequence-specific manner, for the potential saturation of the probe spots with bound transcripts and for the sequence-specific binding of specific transcripts. The obtained chip characteristics in combination with the sensitivity-corrected probe-intensity values provide expression estimates scaled in natural units, which are given by the binding constants of the particular hybridization.
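    The delta and sigma coordinates that define the hook curve can be computed directly from the PM and MM intensities. The sketch below is a simplified illustration; the base-10 logarithm, the smoothing window and the function name are assumptions, and the actual method additionally applies sequence-specific affinity corrections before fitting the Langmuir model.

```python
# Hedged sketch: per probe pair, delta is the log-difference and sigma the
# (half) log-sum of PM and MM intensities; smoothing delta against sigma
# yields the hook-shaped curve referred to above.
import numpy as np

def hook_coordinates(pm, mm, window=100):
    """Return smoothed (sigma, delta) coordinates for arrays of PM/MM intensities."""
    delta = np.log10(pm) - np.log10(mm)
    sigma = 0.5 * (np.log10(pm) + np.log10(mm))
    order = np.argsort(sigma)                     # rank probe pairs by sigma
    kernel = np.ones(window) / window             # simple moving-average smoothing
    sigma_s = np.convolve(sigma[order], kernel, mode="valid")
    delta_s = np.convolve(delta[order], kernel, mode="valid")
    return sigma_s, delta_s
```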

    Current quark mass dependence of nucleon magnetic moments and radii

    A calculation of the current-quark-mass dependence of nucleon static electromagnetic properties is necessary in order to use observational data as a means to place constraints on the variation of Nature's fundamental parameters. A Poincaré covariant Faddeev equation, which describes baryons as composites of confined quarks and nonpointlike diquarks, is used to calculate this dependence. The results indicate that, like observables dependent on the nucleons' magnetic moments, quantities sensitive to their magnetic and charge radii, such as the energy levels and transition frequencies in Hydrogen and Deuterium, might also provide a tool with which to place limits on the allowed variation in Nature's constants. Comment: 23 pages, 2 figures, 4 tables, 4 appendices

    Burden of disease in Thailand: changes in health gap between 1999 and 2004

    Background: Continuing comprehensive assessment of the population health gap is essential for effective health planning. This paper assesses changes in the magnitude and pattern of disease burden in Thailand between 1999 and 2004. It further draws lessons, relevant to other developing country settings, from applying the global burden of disease (GBD) methods to the Thai context.
    Methods: Multiple sources of mortality and morbidity data for both years were assessed and used to estimate Disability-Adjusted Life Years (DALYs) lost for 110 specific diseases and conditions relevant to the country's health problems. Causes of death from national vital registration were adjusted for misclassification using a verbal autopsy study.
    Results: Between 1999 and 2004, DALYs lost per 1,000 population decreased slightly in men, while a minor increase was observed in women. HIV/AIDS maintained the highest burden for men in both 1999 and 2004, while in women stroke took over the first rank held by HIV/AIDS in 1999. Among the top twenty diseases, there was a slight increase in the proportion of non-communicable diseases, and two out of three infectious diseases showed a decreased burden, the exception being lower respiratory tract infections.
    Conclusion: The study highlights the distinctive pattern of disease burden in Thailand, where an epidemiological transition has occurred: non-communicable diseases are on the rise, yet the burden from HIV/AIDS resulting from the epidemic in the 1990s remains high, and injuries show negligible change. The lessons learned indicate that assessing DALYs over time critically requires continuing improvement of data sources, particularly cause-of-death statistics, together with institutional capacity and long-term commitment.
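    For readers unfamiliar with the GBD health-gap measure, the core arithmetic is straightforward: DALYs are the sum of years of life lost to premature mortality (YLL) and years lived with disability (YLD). The sketch below shows only this basic formula; it omits the discounting and age-weighting options of the full GBD method, and the numbers in the usage example are hypothetical.

```python
# Basic DALY arithmetic: DALYs = YLL + YLD.
def dalys(deaths, life_expectancy_at_death, incident_cases, disability_weight, duration):
    yll = deaths * life_expectancy_at_death                 # years of life lost
    yld = incident_cases * disability_weight * duration     # years lived with disability
    return yll + yld

# Hypothetical condition: 1,000 deaths with 30 years of remaining life
# expectancy, plus 5,000 cases with disability weight 0.2 lasting 2 years.
print(dalys(1_000, 30, 5_000, 0.2, 2))  # 32,000 DALYs
```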

    Accurate Estimates of Microarray Target Concentration from a Simple Sequence-Independent Langmuir Model

    Background: Microarray technology is a commonly used tool for assessing global gene expression. Many models for estimation of target concentration based on observed microarray signal have been proposed, but, in general, these models have been complex and platform-dependent.
    Principal Findings: We introduce a universal Langmuir model for estimation of absolute target concentration from microarray experiments. We find that this sequence-independent model, characterized by only three free parameters, yields excellent predictions for four microarray platforms, including Affymetrix, Agilent, Illumina and a custom-printed microarray. The model also accurately predicts concentration for the MAQC data sets. This approach significantly reduces the computational complexity of quantitative target concentration estimates.
    Conclusions: Using a simple form of the Langmuir isotherm model, with a minimum of parameters and assumptions, and without explicit modeling of individual probe properties, we were able to recover absolute transcript concentrations with high R^2 on four different array platforms. The results obtained here suggest that with a "spiked-in" concentration series
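    A three-parameter Langmuir isotherm of the kind described above can be fitted with standard tools. In the sketch below the parameterization (background, saturation value and half-saturation constant K), the simulated dilution series and all names are illustrative assumptions rather than the paper's exact model.

```python
# Hedged sketch: fit a three-parameter Langmuir isotherm to a spike-in
# dilution series, then invert it to estimate an unknown target concentration.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, background, saturation, K):
    """Predicted probe signal at target concentration c."""
    return background + saturation * c / (c + K)

# Simulated calibration data (concentrations in arbitrary units).
rng = np.random.default_rng(0)
conc = np.array([0.5, 1, 2, 4, 8, 16, 32, 64, 128])
signal = langmuir(conc, 100, 4000, 30) * rng.normal(1.0, 0.05, conc.size)

params, _ = curve_fit(langmuir, conc, signal, p0=[50, 1000, 10])

def estimate_concentration(obs, background, saturation, K):
    """Invert the fitted isotherm (valid only below saturation)."""
    return K * (obs - background) / (saturation - (obs - background))

print(params, estimate_concentration(1500.0, *params))
```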
    • …